
Replicate Loggers

This appendix provides a description of the following Replicate loggers:


ADDONS

Only relevant when working with a Replicate add-on. Currently, the only add-ons are user-defined transformations.

ASSERTION

When the log contains an ASSERTION WARNING, it usually means that Replicate detected an anomaly with the data, which might result in replication issues at some stage. These warnings are not exposed in the web console and do not trigger notifications.

COMMON

Writes low-level messages such as network activity.

Information note

Setting this logger to "Trace" is not recommended, as it will write a huge amount of data to the log.

COMMUNICATION

Provides additional information about the communication between Replicate and the source and target components. For example, when using Hadoop, it will print the CURL debug messages and the "Apply" of the actual files (mainly CURL and HTTP Client messages).

DATA_RECORD

Only available for some endpoints and may be implemented differently for each endpoint. It writes information about each change that occurs. While in Oracle it writes only the header fields, for some endpoints, such as Sybase ASE and IBM DB2 for LUW sources, it will also include the changed data. It records when a specific event was captured as well as the event context.

The content itself will not be presented; all events will be logged, even when a record is not in the task scope. (This is not relevant for Oracle LogMiner, as the list of objects is propagated to the LogMiner session.)

Example

Produce INSERT event: object id 93685 context '0000000003A4649301000001000005190000821F001000010000000003A46427' xid [000000000f9abd46] timestamp '2017-06-07 09:01:58' thread 1 (oracdc_reader.c:2178)


DATA_STRUCTURE

Used for internal Replicate data structures and is related to how the code deals with the data and stores it in memory. In general, a data structure is a way of organizing data in a computer so that it can be used efficiently.

Information note

Do not set to "Trace" unless specifically requested by Qlik.

FILE_FACTORY

Relevant to the Hadoop target, Amazon Redshift, and Microsoft Azure Synapse Analytics. This component is responsible for moving the files from Replicate to the target which, in the case of Hadoop, is the HDFS stage of the task.

FILE_TRANSFER (AKA CIFTA)

Writes to the log when the File Transfer component is used to push files to a specific location.

INFRASTRUCTURE

Records information related to the infrastructure layers of the Replicate code: ODBC infrastructure, logger infrastructure, PROTO_BUF, REPOSITORY, FILES USE, opening and closing threads, saving the task state, and so on.

IO

Logs all IO operations (i.e. file operations), such as checking directory size, creating directories, deleting directories, and so on.

Example:

[IO ]T: scanning 'E:\Program Files\Attunity\Replicate\data\tasks\Task_name/data_files' directory size (at_dir.c:827)

METADATA_CHANGES

Shows the actual DDL changes that are included in the scope (available for specific endpoints).

METADATA_MANAGER

Writes information whenever Replicate reads metadata from the source or target, or stores it. Manages table metadata, the metadata store, and dynamic metadata.

PERFORMANCE

Currently used for latency only. Logs latency values for source and target endpoints every 30 seconds.

REST_SERVER

Handles all REST requests (API and UI). Also shows the interaction between Replicate and Qlik Enterprise Manager.

SERVER

The server thread in the task that communicates with the Replicate Server service on task start, stop, etc. Includes init functions for the task and the task definition.

SORTER

The main component in CDC that routes the changes captured from the source to the target.

Responsible for:

  • Synchronizing Full Load and CDC changes
  • Deciding which events to apply as cached changes
  • Storing the transactions that arrive from the source database until they are committed, and sending them to the target database in the correct order (i.e. by commit time).

Whenever there is a CDC issue such as missing events, events not applied, or unacceptable CDC latency, it is recommended to enable "Verbose" for this logger.
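
To illustrate the routing logic described above, the following minimal Python sketch shows commit-time ordering. It is illustrative only: the names (xid, commit_ts) and the heap-based approach are assumptions made for the example, not Replicate internals, and a real Sorter also synchronizes with Full Load and handles cached changes.

    import heapq
    import itertools
    from collections import defaultdict

    class MiniSorter:
        def __init__(self, apply_change):
            self.open_txns = defaultdict(list)  # xid -> changes held until commit
            self.ready = []                     # min-heap: (commit_ts, seq, changes)
            self.seq = itertools.count()        # tie-breaker for equal commit times
            self.apply_change = apply_change    # callback that sends to the target

        def on_change(self, xid, change):
            # Changes are buffered per transaction; nothing is sent yet.
            self.open_txns[xid].append(change)

        def on_rollback(self, xid):
            # Rolled-back transactions are discarded, never applied.
            self.open_txns.pop(xid, None)

        def on_commit(self, xid, commit_ts):
            # A committed transaction becomes eligible for delivery.
            changes = self.open_txns.pop(xid, [])
            heapq.heappush(self.ready, (commit_ts, next(self.seq), changes))

        def flush(self):
            # Deliver committed transactions to the target in commit-time order.
            while self.ready:
                _, _, changes = heapq.heappop(self.ready)
                for change in changes:
                    self.apply_change(change)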

SORTER_STORAGE

SORTER_STORAGE is the storage component of the Sorter. It stores transactions (i.e. changes) in memory and offloads them to disk when the transactions are too large or remain open for an unreasonably long time. As this logger records a large amount of information, it should only be set to "Trace" if you encounter storage issues such as corrupt swap files, poor performance when offloading changes to disk, and so on.
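
The offloading behavior can be pictured with a small Python sketch. The 64 KB threshold, pickle serialization, and temporary-file layout are arbitrary choices for the example; Replicate's actual swap-file mechanics work differently.

    import os
    import pickle
    import tempfile

    class TxnBuffer:
        def __init__(self, max_in_memory_bytes=64 * 1024):
            self.max_bytes = max_in_memory_bytes
            self.in_memory = []    # changes kept in RAM
            self.size = 0
            self.swap_file = None  # created only once the transaction grows too large

        def add(self, change):
            blob = pickle.dumps(change)
            if self.swap_file is None and self.size + len(blob) > self.max_bytes:
                # Offload: from now on, changes for this transaction go to disk.
                self.swap_file = tempfile.NamedTemporaryFile(delete=False)
            if self.swap_file is not None:
                # Length-prefixed records so drain() can read them back.
                self.swap_file.write(len(blob).to_bytes(4, "big") + blob)
            else:
                self.in_memory.append(change)
                self.size += len(blob)

        def drain(self):
            # Yield changes in arrival order: memory-resident first, then spilled.
            yield from self.in_memory
            if self.swap_file is not None:
                self.swap_file.close()
                with open(self.swap_file.name, "rb") as f:
                    while header := f.read(4):
                        yield pickle.loads(f.read(int.from_bytes(header, "big")))
                os.unlink(self.swap_file.name)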

SOURCE_CAPTURE

This is the main CDC component on the source side. As such, it should be used to troubleshoot any CDC source issue. Note that setting this logger to "Verbose" is not recommended unless absolutely necessary, as it will record an enormous amount of data to the log.

Some target side components that use the source (e.g. LOB lookup) also use this logger. Setting the logger to "Trace" may be useful for troubleshooting performance issues and other source CDC issues such as bad data read from the source.

SOURCE_LOG_DUMP

When using Replicate Log Reader, this component creates additional files with dumps of the read changes. The logger will write the actual record as it's being captured from the source.

Note that the data will be stored in a separate file and not in the log itself.

SOURCE_UNLOAD

Records source activity related to Full load operations and includes the SELECT statement executed against the source tables prior to Full Load.

STREAM

The Stream is the buffer in memory where data and control commands are kept. There are two types of stream: data streams and control streams. In the data stream, source data is passed to the target or to the Sorter using this memory buffer. In the control stream, commands such as Start Full Load and Stop Full Load are passed to components.

As it records a large amount of data, this logger should only be set to "Trace" when a specific stream issue is encountered. Examples of stream issues include poor performance, issues with Control commands (e.g. commands not being performed), issues when loading many tables that may overload the control stream with Control commands, and so on.
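
Conceptually, the two stream types behave like separate bounded queues, as in the illustrative Python sketch below. The queue sizes and message shapes are invented for the example.

    import queue

    data_stream = queue.Queue(maxsize=10_000)  # source records -> Sorter/target
    control_stream = queue.Queue(maxsize=100)  # task commands, e.g. Full Load

    def push_changes(changes):
        for change in changes:
            # Blocks when the buffer is full; sustained blocking here is the
            # kind of poor-performance symptom this logger can reveal.
            data_stream.put(change)

    def send_command(name):
        # Control commands travel on their own stream; loading many tables at
        # once can flood this queue with commands, another issue noted above.
        control_stream.put({"command": name})

    send_command("Start Full Load")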

STREAM_COMPONENT

Used by the Source, Sorter and Target to interact and communicate with the Stream component.

Example

Force switch to Transactional apply mode for Hadoop target (endpointshell.c:1340)

TABLES_MANAGER

Manages table status, including whether the tables were loaded into the target, the number of events, how the tables are partitioned, and so on.

TARGET_APPLY

Determines which changes are applied to the target during CDC and is relevant to both the Batch optimized apply and Transactional apply methods. It provides information about all Apply issues including missing events, bad data, and so on. As it usually does not record a lot of data, it can be safely set to "Verbose" in order to troubleshoot issues.

The logged messages will differ according to the target database.
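
The difference between the two apply methods can be sketched as follows. This is a conceptual contrast using a generic DB-API cursor and an invented Change record, not Replicate's implementation.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Change:
        sql: str       # parameterized statement, e.g. "INSERT INTO t1 VALUES (?, ?)"
        params: tuple  # bound values for one captured change

    def transactional_apply(cursor, changes):
        # Transactional apply: each change is executed individually,
        # preserving commit order.
        for c in changes:
            cursor.execute(c.sql, c.params)

    def batch_optimized_apply(cursor, changes):
        # Batch optimized apply: like statements are grouped and sent in bulk,
        # trading strict per-change ordering for far fewer round trips.
        groups = defaultdict(list)
        for c in changes:
            groups[c.sql].append(c.params)
        for sql, param_sets in groups.items():
            cursor.executemany(sql, param_sets)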

TARGET_LOAD

Provides information about Full Load operations on the target side. Depending on the target, it may also print the metadata of the target table.

TASK_MANAGER

This is the parent task component that manages the other components in the task.

It is responsible for issuing commands to start or finish loading a table, creating the different component threads, starting or stopping tasks, and so on.

It is useful for troubleshooting situations such as tables not loading, tables stuck in loading, one of the components not stopping or starting properly, and so on.

TRANSFORMATION

Logs information related to transformations. When set to "Trace", it will log the actual transformations being used by the task.

Example:

In the example below, a new column named "C" was added to the table. The expression is $AR_H_STREAM_POSITION.

[TRANSFORMATION ]T: Transformation on table USER3.TEST1 exists (manipulation_manager.c:511)
[TRANSFORMATION ]T: Set transformation for table 'USER3.TEST1' (manipulator.c:596)
[TRANSFORMATION ]T: New column 'C', type: 'kAR_DATA_TYPE_STR' (manipulator.c:828)
[TRANSFORMATION ]T: Transformation expression is '$AR_H_STREAM_POSITION' (manipulator.c:551)
[TRANSFORMATION ]T: Final expression is '$AR_H_STREAM_POSITION' (expression_calc.c:822)

UTILITIES

In most cases, UTILITIES logs issues related to notifications.
